Efficiently exploring for human robot interaction: partially observable Poisson processes
Authors
Abstract
Consider a mobile robot exploring an office building with the aim of observing as much human activity as possible over several days. It must learn where and when people are to be found, count the activities it observes, and revisit popular places at the right time. In this paper we present a series of Bayesian estimators for activity levels that improve on simple counting. We then show how these estimators can be used to drive efficient exploration. The estimators arise from modelling the observed counts as a partially observable Poisson process (POPP). This paper presents novel extensions to POPP for the following cases: (i) the robot's sensors are correlated, (ii) the sensor model, itself built from data, is also unreliable, and (iii) both combined. It combines the resulting estimators with a simple but effective solution to the exploration-exploitation trade-off faced by a robot in a real deployment. A 15-day series of deployments shows that our approach boosts the number of observed activities by 70% relative to a baseline and produces more accurate estimates of the activity level at each place.
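The core idea of estimating a Poisson rate from partially observed counts can be illustrated with a conjugate Gamma-Poisson update. This is a minimal sketch, not the paper's actual estimators: it assumes a single known detection probability `detect_prob` (the paper's extensions handle correlated sensors and unreliable sensor models), and relies on Poisson thinning, i.e. if events occur at rate λ and each is detected independently with probability p, the observed counts are Poisson with rate pλ. All function and parameter names here are illustrative.

```python
# POPP-style rate estimation sketch: true events ~ Poisson(lam * t), each
# detected with probability p, so observed counts ~ Poisson(p * lam * t).
# With a Gamma(alpha, beta) prior on lam (shape/rate parameterization),
# the posterior after seeing k detections over duration t is
# Gamma(alpha + k, beta + p * t).

def popp_update(alpha, beta, observed_count, duration, detect_prob):
    """Conjugate posterior update for the latent event rate."""
    return alpha + observed_count, beta + detect_prob * duration

def posterior_mean(alpha, beta):
    """Posterior mean of the latent rate under Gamma(alpha, beta)."""
    return alpha / beta

# Start from a vague Gamma(1, 1) prior on the activity rate.
alpha, beta = 1.0, 1.0
# Observe 6 events over 2 time units with a sensor that detects
# only 50% of events.
alpha, beta = popp_update(alpha, beta, observed_count=6,
                          duration=2.0, detect_prob=0.5)
print(posterior_mean(alpha, beta))  # (1 + 6) / (1 + 0.5 * 2) = 3.5
```

Note that a naive count-based estimate over the same window would be 6 / 2 = 3 events per time unit; correcting for the imperfect sensor inflates the estimated latent rate, which matters when deciding which places are worth revisiting.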
Related resources
Finite Horizon Decision Timing with Partially Observable Poisson Processes
We study decision timing problems on finite horizon with Poissonian information arrivals. In our model, a decision maker wishes to optimally time her action in order to maximize her expected reward. The reward depends on an unobservable Markovian environment, and information about the environment is collected through a (compound) Poisson observation process. Examples of such systems arise in in...
Partially observable Markov decision processes
For reinforcement learning in environments in which an agent has access to a reliable state signal, methods based on the Markov decision process (MDP) have had many successes. In many problem domains, however, an agent suffers from limited sensing capabilities that preclude it from recovering a Markovian state signal from its perceptions. Extending the MDP framework, partially observable Markov...
Robot Planning in Partially Observable Continuous Domains
We present a value iteration algorithm for learning to act in Partially Observable Markov Decision Processes (POMDPs) with continuous state spaces. Mainstream POMDP research focuses on the discrete case and this complicates its application to, e.g., robotic problems that are naturally modeled using continuous state spaces. The main difficulty in defining a (belief-based) POMDP in a continuous s...
Probabilistic Robot Navigation in Partially Observable Environments
Autonomous mobile robots need very reliable navigation capabilities in order to operate unattended for long periods of time. This paper reports on first results of a research program that uses partially observable Markov models to robustly track a robot’s location in office environments and to direct its goal-oriented actions. The approach explicitly maintains a probability distribution over th...
Efficiently Explaining Deterministic Exogenous Events in Partially Observable Environments
We consider the problem of continual planning (DesJardins et al. 1999) in hazardous partiallyobservable dynamic environments, where deterministic exogenous events that cannot be directly observed affect the state of the world and no plan can be guaranteed to succeed. In these environments, limited observability makes state transitions ambiguous and difficult to predict. To resolve this ambiguit...
Journal
Journal title: Autonomous Robots
Year: 2022
ISSN: 0929-5593, 1573-7527
DOI: https://doi.org/10.1007/s10514-022-10070-9